The surging interest in graph convolutional networks (GCNs) has produced thousands of GCN variants, with hundreds introduced every year. In contrast, many GCN models reuse only a handful of benchmark datasets, because the graphs of real interest (e.g., social or commercial networks) are proprietary. We propose a new graph generation problem: given a possibly proprietary source graph, generate a diverse set of benchmark graphs that follow its distribution, subject to three requirements: 1) benchmark effectiveness as a substitute for the source graph in GCN research, 2) scalability to handle large real-world graphs, and 3) a privacy guarantee for end users. With a novel graph encoding scheme, we reframe large-scale graph generation as a medium-length sequence generation problem and apply the strong generative power of the Transformer architecture to the graph domain. Extensive experiments against a wide range of graph generative models show that our model can successfully generate benchmark graphs with realistic graph structure, node attributes, and the node labels required for benchmarking GCNs on node classification tasks.
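The abstract does not spell out the graph encoding scheme, so the sketch below is only a generic illustration of the underlying idea of serializing an attributed graph into a flat token sequence that an autoregressive Transformer could model; the ordering rule, special tokens, and function name are assumptions, not the paper's method.

```python
# Toy illustration (not the paper's encoding scheme): serialize a small
# attributed graph into a flat token sequence for an autoregressive model.
import networkx as nx

def graph_to_sequence(g: nx.Graph) -> list:
    """Emit one block of tokens per node: its id, its label, then its earlier neighbors."""
    tokens = []
    for node in sorted(g.nodes):  # fixed node ordering; BFS/degree orderings are also common
        tokens += ["<node>", node, g.nodes[node].get("label", "?")]
        tokens += ["<edges>"] + sorted(n for n in g.neighbors(node) if n < node)
    tokens.append("<eos>")
    return tokens

g = nx.Graph()
g.add_nodes_from([(0, {"label": "A"}), (1, {"label": "B"}), (2, {"label": "A"})])
g.add_edges_from([(0, 1), (1, 2)])
print(graph_to_sequence(g))
# ['<node>', 0, 'A', '<edges>', '<node>', 1, 'B', '<edges>', 0, '<node>', 2, 'A', '<edges>', 1, '<eos>']
```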
TensorFlow GNN (TF-GNN) is a scalable library for graph neural networks in TensorFlow. It is designed from the bottom up to support the rich heterogeneous graph data that arises in today's information ecosystems. Many production models at Google use TF-GNN, and it has recently been released as an open-source project. In this paper we describe the TF-GNN data model, its Keras modeling API, and relevant capabilities such as graph sampling, distributed training, and accelerator support.
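As a rough illustration of the heterogeneous data model mentioned above, the sketch below builds a tiny two-node-set graph with the public TF-GNN API; the feature names and values are made up, and exact signatures may vary across library versions.

```python
import tensorflow as tf
import tensorflow_gnn as tfgnn  # pip install tensorflow_gnn

# A tiny heterogeneous graph: 3 "user" nodes, 2 "item" nodes, 3 "buys" edges.
graph = tfgnn.GraphTensor.from_pieces(
    node_sets={
        "user": tfgnn.NodeSet.from_fields(
            sizes=tf.constant([3]),
            features={"age": tf.constant([[25.], [31.], [40.]])}),
        "item": tfgnn.NodeSet.from_fields(
            sizes=tf.constant([2]),
            features={"price": tf.constant([[9.99], [4.50]])}),
    },
    edge_sets={
        "buys": tfgnn.EdgeSet.from_fields(
            sizes=tf.constant([3]),
            adjacency=tfgnn.Adjacency.from_indices(
                source=("user", tf.constant([0, 1, 2])),
                target=("item", tf.constant([0, 0, 1])))),
    })
print(graph.node_sets["user"].features["age"])
```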
Data continuously emitted from industrial ecosystems such as social or e-commerce platforms are commonly represented as heterogeneous graphs (HGs) composed of multiple node/edge types. State-of-the-art graph learning methods for HGs, known as heterogeneous graph neural networks (HGNNs), are applied to learn deep, context-informed representations. However, many HG datasets from industrial applications suffer from label imbalance between node types. As there is no direct way to learn from labels rooted at different node types, HGNNs have been applied to only a few node types with abundant labels. We propose a zero-shot transfer learning module for HGNNs, called a Knowledge Transfer Network (KTN), that transfers knowledge from label-abundant source node types to zero-labeled target node types through the rich relational information provided in the HG. KTN is derived from a theoretical relationship, which we introduce in this work, between the distinct feature extractors that an HGNN model assigns to each node type. KTN improves the performance of six different HGNN models by up to 960% for inference on zero-labeled node types, and it outperforms state-of-the-art transfer learning baselines across 18 different transfer learning tasks on HGs.
Despite advances in the field of graph neural networks (GNNs), only a small number of datasets are currently used to evaluate new models. This continued reliance on a handful of datasets provides minimal insight into the performance differences between models and is especially challenging for industrial practitioners, whose graphs are likely to differ substantially from the datasets used as academic benchmarks. In the course of Google's work on GNN infrastructure and open-source software, we have sought to develop improved benchmarks that are robust, tunable, scalable, and generalizable. In this work we introduce GraphWorld, a novel methodology and system for benchmarking GNN models on an arbitrarily large population of synthetic graphs, for any conceivable GNN task. GraphWorld allows a user to efficiently generate a world with millions of statistically diverse datasets. It is accessible, scalable, and easy to use. GraphWorld can be run on a single machine without specialized hardware, or it can easily scale up to run on arbitrary clusters or cloud frameworks. With GraphWorld, a user has fine-grained control over graph generator parameters and can benchmark arbitrary GNN models with built-in hyperparameter tuning. We present insights from GraphWorld experiments on the performance characteristics of tens of thousands of GNN models over millions of benchmark datasets. We further show that GraphWorld efficiently explores regions of benchmark dataset space not covered by standard benchmarks, revealing comparisons between models that have not historically been obtainable. Using GraphWorld, we are also able to study the relationship between graph properties and task performance metrics, which is nearly impossible with the classic collections of real-world benchmarks.
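GraphWorld's actual generators, parameters, and pipeline are not described in this summary; as a loose analogy only, the sketch below samples a small "population" of stochastic block model graphs over a grid of generator parameters with networkx, a stand-in for the real system. The parameter names and grid values are invented for the example.

```python
import itertools
import networkx as nx

def sample_population(n_per_block=50, n_blocks=4, seed=0):
    """Yield (params, graph) pairs over a small generator-parameter grid."""
    sizes = [n_per_block] * n_blocks
    for p_in, p_out in itertools.product([0.02, 0.05, 0.10], [0.005, 0.01, 0.02]):
        probs = [[p_in if i == j else p_out for j in range(n_blocks)]
                 for i in range(n_blocks)]
        g = nx.stochastic_block_model(sizes, probs, seed=seed)
        yield {"p_in": p_in, "p_out": p_out}, g

for params, g in sample_population():
    # In a real pipeline, each graph would be paired with node features/labels
    # and handed to the GNN models under benchmark.
    print(params, g.number_of_nodes(), g.number_of_edges())
```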
Datasets in which measurements of two (or more) types are obtained from a common set of samples arise in many scientific applications. A common problem in the exploratory analysis of such data is identifying groups of features of different data types that are strongly associated. A bimodule is a pair (A, B) of feature sets from the two data types such that the aggregate cross-correlation between the features in A and those in B is large. A bimodule (A, B) is stable if A equals the set of features significantly associated with the features in B, and vice versa. In this paper, we propose and investigate an iterative-testing-based procedure (BSP) to identify stable bimodules in bi-view data. We carry out a thorough simulation study to assess the performance of BSP and present an extended application to the problem of expression quantitative trait loci (eQTL) analysis using recent data from the GTEx project. In addition, we apply BSP to climate data in order to identify regions in North America where annual temperature variation affects precipitation.
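The exact test statistics and error control used by BSP are not given in this summary; the sketch below is only a caricature of the iterate-until-stable idea, alternating between the two views with Pearson correlation p-values and Benjamini-Hochberg selection. The aggregation rule (mean of the selected features) and all thresholds are assumptions.

```python
import numpy as np
from scipy.stats import pearsonr

def bh_select(pvals, alpha=0.05):
    """Indices rejected by the Benjamini-Hochberg procedure at level alpha."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    below = np.nonzero(p[order] <= alpha * np.arange(1, m + 1) / m)[0]
    if below.size == 0:
        return set()
    k = below.max()                      # largest index meeting the BH condition
    return set(order[: k + 1].tolist())

def correlated_features(signal, data, alpha=0.05):
    """Features of `data` whose correlation with `signal` survives BH selection."""
    pvals = [pearsonr(signal, data[:, j])[1] for j in range(data.shape[1])]
    return bh_select(pvals, alpha)

def bsp_caricature(X, Y, seed_feature=0, alpha=0.05, max_iter=20):
    """Alternate between the two views until the feature sets stabilize."""
    A, B = {seed_feature}, set()
    for _ in range(max_iter):
        B_new = correlated_features(X[:, sorted(A)].mean(axis=1), Y, alpha)
        if not B_new:
            return set(), set()          # the seed did not grow into a stable bimodule
        A_new = correlated_features(Y[:, sorted(B_new)].mean(axis=1), X, alpha)
        if A_new == A and B_new == B:
            return A, B                  # stable: each side reproduces the other
        A, B = A_new, B_new
        if not A:
            return set(), set()
    return A, B
```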
Graph neural networks (GNNs) have achieved state-of-the-art results on many graph analysis tasks such as node classification and link prediction. However, important unsupervised problems on graphs, such as graph clustering, have proved more resistant to advances in GNNs. Graph clustering shares the same overall goal as node pooling in GNNs, so do GNN pooling methods also do a good job at clustering graphs? Surprisingly, the answer is no: current GNN pooling methods often fail to recover the cluster structure in cases where simple baselines, such as k-means applied to learned representations, work well. We investigate further by carefully designing a set of experiments to study different signal-to-noise scenarios in both graph structure and attribute data. To address the poor performance of these methods on clustering, we introduce Deep Modularity Networks (DMoN), an unsupervised pooling method inspired by the modularity measure of clustering quality, and show how it tackles the recovery of the challenging clustering structure of real-world graphs. Likewise, on real-world data, we show that DMoN produces high-quality clusters that correlate strongly with ground-truth labels, achieving state-of-the-art results with over 40% improvement over other pooling methods across different metrics.
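The pooling objective itself is not spelled out in this summary. As a hedged sketch of the modularity-inspired idea, a spectral-style soft modularity for a soft cluster assignment matrix can be computed as below; the collapse-regularization term and its weighting are assumptions in the spirit of modularity pooling, not necessarily DMoN's exact loss, and the matrix names are ours.

```python
import numpy as np

def soft_modularity_loss(A, C):
    """Negative soft modularity plus a simple collapse regularizer.

    A: (n, n) symmetric adjacency matrix.
    C: (n, k) row-stochastic soft cluster assignments (e.g. softmax outputs).
    """
    d = A.sum(axis=1)                          # node degrees
    two_m = d.sum()                            # 2 * number of edges
    B = A - np.outer(d, d) / two_m             # modularity matrix
    modularity = np.trace(C.T @ B @ C) / two_m
    n, k = C.shape
    collapse = np.sqrt(k) / n * np.linalg.norm(C.sum(axis=0)) - 1.0
    return -modularity + collapse

# Tiny usage example: two obvious clusters beat a collapsed assignment.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
good = np.repeat(np.eye(2), 3, axis=0)         # nodes 0-2 -> cluster 0, nodes 3-5 -> cluster 1
flat = np.full((6, 2), 0.5)                    # uninformative assignment
print(soft_modularity_loss(A, good) < soft_modularity_loss(A, flat))  # True
```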
We demonstrate a proof-of-concept of a large language model conducting corporate lobbying related activities. We use an autoregressive large language model (OpenAI's text-davinci-003) to determine if proposed U.S. Congressional bills are relevant to specific public companies and provide explanations and confidence levels. For the bills the model deems as relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model, which outperforms the baseline of predicting the most common outcome of irrelevance. However, we test the ability to determine the relevance of a bill with the previous OpenAI GPT-3 model (text-davinci-002), which was state-of-the-art on many language tasks until text-davinci-003 was released on November 28, 2022. The performance of text-davinci-002 is worse than simply always predicting that a bill is irrelevant to a company. These results suggest that, as large language models continue to improve core natural language understanding capabilities, performance on corporate lobbying related tasks will continue to improve. We then discuss why this could be problematic for societal-AI alignment.
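A rough sketch of the kind of relevance query described above, using the legacy openai Completions endpoint through which text-davinci-003 was served; the prompt wording, company fields, and output handling are hypothetical, not the authors' actual prompt or code.

```python
import openai  # legacy (pre-1.0) openai client; openai.api_key must be set beforehand

def assess_relevance(bill_title: str, bill_summary: str, company: str, business: str) -> str:
    """Ask the model whether a bill is relevant to a company (hypothetical prompt)."""
    prompt = (
        "You are a corporate lobbyist's assistant.\n"
        f"Company: {company}\nBusiness description: {business}\n"
        f"Bill title: {bill_title}\nBill summary: {bill_summary}\n"
        "Is this bill relevant to the company? Answer YES or NO, then give a one-sentence "
        "explanation and a confidence level from 0 to 100."
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0,       # keep outputs stable for benchmarking
        max_tokens=200,
    )
    return response["choices"][0]["text"].strip()
```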
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
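In symbols, paraphrasing the abstract with our own notation (not necessarily the paper's), the two conditions related by the main result can be written as:

```latex
% Posterior collapse: the exact posterior equals the prior for (almost) all data,
p_\theta(z \mid x) = p(z) \quad \text{for almost all } x .
% Latent-variable non-identifiability: the likelihood does not vary with z,
p_\theta(x \mid z) = p_\theta(x \mid z') \quad \text{for all } z, z' .
% The stated result: the first condition holds if and only if the second does.
```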
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
In this paper we derive a PAC-Bayesian-Like error bound for a class of stochastic dynamical systems with inputs, namely, for linear time-invariant stochastic state-space models (stochastic LTI systems for short). This class of systems is widely used in control engineering and econometrics, in particular, they represent a special case of recurrent neural networks. In this paper we 1) formalize the learning problem for stochastic LTI systems with inputs, 2) derive a PAC-Bayesian-Like error bound for such systems, 3) discuss various consequences of this error bound.
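For concreteness, a stochastic LTI state-space model of the kind referenced above is commonly written in innovation form as follows (a standard textbook parametrization; the paper's exact setup and noise assumptions may differ):

```latex
x_{t+1} = A\,x_t + B\,u_t + K\,e_t, \qquad
y_t     = C\,x_t + D\,u_t + e_t,
```

where $u_t$ is the input, $y_t$ the output, $x_t$ the hidden state, $e_t$ a zero-mean white-noise innovation process, and $A, B, C, D, K$ are constant matrices.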